Easy2Siksha.com
GNDU Question Paper-2024
BCA 5th Semester
OPERATING SYSTEM
Time Allowed: Three Hours Max. Marks: 75
Note: Attempt Five questions in all, selecting at least One question from each section. The
Fifth question may be attempted from any section. All questions carry equal marks.
SECTION-A
1. What is Distributed System? What are its Advantages?
2. What is the importance of CPU scheduling algorithm? Explain any two such algorithms.
SECTION-B
3. What is a Semaphore? Explain its use with an example.
4. Explain the concept of swapping with diagram.
SECTION-C
5. Explain demand paging.
6. Why do we need disk and disk scheduling algorithm? What is disk reliability?
SECTION-D
7. What is a deadlock? How does it occur? What can we do to handle it?
8. Explain the concept of deadlock avoidance.
GNDU Answer Paper-2024
BCA 5th Semester
OPERATING SYSTEM
Time Allowed: Three Hours Max. Marks: 75
Note: Attempt Five questions in all, selecting at least One question from each section. The
Fifth question may be attempted from any section. All questions carry equal marks.
SECTION-A
1. What is Distributed System? What are its Advantages?
Ans: Imagine you are organizing a huge school festival. There are hundreds of tasks to be
done: decorations, ticket sales, food stalls, and games. If only one person tries to do
everything, it would take ages, mistakes would happen, and chaos would erupt. But if you
divide tasks among several teams, each responsible for specific jobs and communicating
with each other, the festival becomes manageable, smooth, and efficient.
This is exactly how a Distributed System works in the world of computers. Instead of relying
on a single, powerful computer to do everything, a distributed system uses multiple
computers (or nodes) that work together as a single system. Each computer handles part of
the work, but they communicate and coordinate to achieve the common goal.
Definition of Distributed System
A Distributed System is a collection of independent computers that appear to the user as a
single coherent system. Even though the computers are physically separate and may be
located in different places, they are connected via a network and work together to perform
tasks.
In simpler terms, imagine a team of chefs working in different kitchens but collaborating to
prepare one large feast. To the guests, it seems like a single seamless kitchen is cooking
everything.
Key Characteristics
1. Multiple Independent Computers: Each node can work independently but collaborates with others.
2. Transparency: To the user, the system behaves like a single machine. They don't need to know which computer is handling which task.
3. Scalability: You can add more computers to handle more work without disrupting the system.
4. Fault Tolerance: If one computer fails, others can continue the work, preventing total system failure.
How It Works: An Everyday Analogy
Think about online shopping platforms like Amazon or Flipkart. When you order a product,
the system has to:
Check product availability in the warehouse.
Process payment.
Manage shipment through a delivery network.
Update inventory.
All these tasks happen simultaneously but are handled by different servers in a distributed
system. If one server fails (say the payment server), the others still function, and the failure
can be managed without crashing the entire system.
Advantages of Distributed Systems
Let’s return to our festival example to understand the advantages in a simple, relatable
way:
1. Resource Sharing
Just like each team in the festival shares tools, computers in a distributed system can
share resources like files, printers, and processing power. This avoids duplication and
improves efficiency.
2. Scalability
Suppose more guests arrive at the festival. You can simply add more teams to
manage the extra crowd. Similarly, in a distributed system, new computers can be
added to handle more workload without disrupting existing operations.
3. Reliability and Fault Tolerance
If the ticket counter team is absent one day, other teams can handle the tickets. In a
distributed system, if one node crashes, others take over the tasks, ensuring
continuous operation.
4. Performance and Speed
Tasks are divided among multiple computers, so the work is done faster. Think of
multiple chefs preparing different dishes simultaneously—it’s much quicker than one
chef doing everything alone.
5. Flexibility
Distributed systems allow heterogeneous computers (different types and capacities)
to work together. It’s like having both experienced and novice volunteers working in
harmony, each contributing according to their abilities.
6. Cost-Effective
Instead of buying one super-expensive supercomputer, you can use multiple
ordinary computers connected together. It’s similar to hiring several volunteers
instead of a single professional team—it’s cheaper and just as effective.
7. Improved Data Sharing and Collaboration
People from different locations can access shared resources simultaneously. In the
festival, imagine coordinators at different locations updating a shared schedule in
real-time; everyone stays informed.
A Simple Diagram of a Distributed System
Here’s a conceptual diagram you can draw in your exam:
Explanation:
Clients interact with the system.
Nodes are independent computers performing tasks.
Network connects all nodes.
Shared Resources can be files, databases, printers, or applications used collectively
by all nodes.
Real-Life Examples of Distributed Systems
1. Internet and World Wide Web: Millions of computers worldwide provide a single seamless web experience.
2. Online Banking Systems: Multiple servers ensure your transaction is safe, quick, and reliable.
3. Cloud Computing Platforms: Services like Google Drive or Dropbox store your data on multiple servers for reliability.
4. Social Media Platforms: Facebook, Instagram, or TikTok serve millions of users by distributing workload among multiple servers.
Conclusion
In short, a Distributed System is like a team working together to achieve a goal that would
be impossible or inefficient for a single computer. It combines the power, flexibility, and
reliability of multiple computers, giving users the experience of a single, powerful system. Its
advantages (resource sharing, scalability, fault tolerance, speed, flexibility, and cost-effectiveness) make it a backbone of modern computing, enabling everything from online
shopping to cloud services.
By understanding distributed systems, we can appreciate how complex digital tasks are
divided, coordinated, and completed seamlessly across the globe, almost like magic, but
powered by teamwork!
2. What is the importance of CPU scheduling algorithm? Explain any two such algorithms.
Ans: Importance of CPU Scheduling Algorithms and Two Examples
A Different Beginning
Imagine a busy doctor's clinic. Patients are waiting outside, each with different needs:
some just need a quick prescription, others require long check-ups, and a few are
emergencies. The doctor has only one consultation room (like a CPU), and patients (like
processes) are waiting in line.
Now the big question is: Who should the doctor see first?
If the doctor chooses randomly, some patients may wait too long. If the doctor always picks
the longest case, quick cases will get delayed. If the doctor always picks the shortest case,
emergencies may be ignored.
This is exactly the challenge faced by an Operating System (OS). The CPU is the doctor, and
processes are the patients. To manage this fairly and efficiently, the OS uses CPU scheduling
algorithms.
Importance of CPU Scheduling Algorithms
CPU scheduling is one of the most important tasks of an operating system. Its importance
can be understood through these points:
1. Maximizes CPU Utilization
o Ensures the CPU is always busy and not sitting idle.
2. Improves Throughput
o More processes are completed in less time.
3. Reduces Waiting Time
o Processes don’t spend unnecessary time waiting in the ready queue.
4. Reduces Turnaround Time
o The total time from process arrival to completion is minimized.
5. Improves Response Time
o Especially important in interactive systems where users expect quick
feedback.
6. Ensures Fairness
o Every process gets a fair chance to use the CPU.
In short, CPU scheduling is like a traffic controller at a busy intersection: it decides who moves first, ensuring smooth flow and avoiding chaos.
Two CPU Scheduling Algorithms
Let’s now explore two important algorithms, explained like stories so they’re easy to
remember.
1. First Come First Serve (FCFS)
Concept:
The process that arrives first in the ready queue is executed first.
It works like a queue at a ticket counter: whoever comes first gets served first.
Example: Suppose three processes arrive with the following burst times (execution times):
Process   Burst Time
P1        5 ms
P2        3 ms
P3        8 ms
Execution Order: P1 → P2 → P3
Gantt Chart:
| P1 | P2 | P3 |
0    5    8    16
Waiting Time:
P1 = 0
P2 = 5
P3 = 8
Average Waiting Time = (0 + 5 + 8) / 3 = 4.33 ms
Advantages:
Simple and easy to implement.
Fair for processes that arrive early.
Disadvantages:
Convoy Effect: A long process can delay all shorter ones behind it.
Not suitable for interactive systems.
Analogy: Like a bus queue, where if the first person takes too long to buy tickets, everyone else waits.
2. Shortest Job Next (SJN) / Shortest Job First (SJF)
Concept:
The process with the shortest burst time is executed first.
It minimizes average waiting time.
Example: Using the same processes:
Process   Burst Time
P1        5 ms
P2        3 ms
P3        8 ms
Execution Order: P2 → P1 → P3
Gantt Chart:
| P2 | P1 | P3 |
0    3    8    16
Waiting Time:
P2 = 0
P1 = 3
P3 = 8
Average Waiting Time = (0 + 3 + 8) / 3 = 3.67 ms
Advantages:
Produces minimum average waiting time.
Efficient for batch processing.
Disadvantages:
Requires knowledge of burst time in advance (not always possible).
Can cause starvation if short processes keep arriving and long ones never get CPU.
Analogy: Like a doctor treating patients with the shortest cases first: quick check-ups are done fast, but a patient with a long case may wait forever.
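The SJF order and waiting times can be sketched similarly. Again a hedged sketch: the helper name is illustrative, and it assumes non-preemptive SJF with all processes arriving at time 0:

```python
# A minimal sketch of non-preemptive Shortest Job First, assuming all
# processes arrive at time 0. Indices 0, 1, 2 stand for P1, P2, P3.

def sjf_waiting_times(bursts):
    """Return {process_index: waiting_time} under Shortest Job First."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = {}
    elapsed = 0
    for i in order:                 # serve shortest burst first
        waiting[i] = elapsed
        elapsed += bursts[i]
    return waiting

bursts = [5, 3, 8]                  # P1, P2, P3
waits = sjf_waiting_times(bursts)
print(waits)                        # {1: 0, 0: 3, 2: 8}
print(sum(waits.values()) / len(waits))   # about 3.67 ms average
```

This matches the hand calculation: P2 waits 0 ms, P1 waits 3 ms, P3 waits 8 ms.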
Diagram: FCFS vs SJF
This diagram shows how the order of execution changes waiting times.
Why These Algorithms Matter
FCFS is fair and simple but can be inefficient.
SJF is efficient but can be unfair to long processes.
In real systems, a balance is needed, sometimes achieved by hybrid algorithms like Round Robin or Priority Scheduling.
Conclusion
CPU scheduling is the art of deciding which process gets the CPU next. Without it, the
system would be chaotic, with some processes hogging resources while others starve.
FCFS is like a queue at a ticket counter: simple but can cause delays.
SJF is like treating the shortest cases first: efficient but can ignore long ones.
For students, this makes CPU scheduling easy to remember: it’s just like managing a waiting
room. For examiners, this answer is enjoyable because it doesn't just list definitions; it tells
a story, uses analogies, and shows calculations with diagrams.
SECTION-B
3. What is a Semaphore? Explain its use with an example.
Ans: Imagine a small village with a single narrow bridge over a river. The bridge is so
narrow that only one vehicle can cross at a time. Naturally, if two vehicles try to cross the
bridge from opposite sides at the same time, they will crash into each other! To prevent this
chaos, the village decides to appoint a bridge keeper. This bridge keeper has a simple rule:
only allow one vehicle on the bridge at a time. When the bridge is free, he signals a vehicle to
cross; when the vehicle finishes crossing, he allows the next one to go.
This scenario is a simple, real-life analogy of a semaphore in computer science.
What is a Semaphore?
A semaphore is like that bridge keeper, but instead of managing cars on a bridge, it manages
access to shared resources in a computer program.
In more formal terms:
A semaphore is a synchronization tool used in concurrent programming.
It is used to control access to a shared resource by multiple processes (or threads) to
prevent conflicts, crashes, or inconsistent data.
Think of it as a flag or counter that tells processes whether they can enter a critical
section (a part of the program where the shared resource is being used) or if they have
to wait.
In other words, a semaphore ensures order and prevents chaos when multiple processes try
to use the same resource simultaneously.
Types of Semaphores
There are two main types:
1. Binary Semaphore (Mutex):
o Works like a traffic light with only two states: green (1) or red (0).
o Only one process can enter the critical section at a time.
o Often used to prevent two threads from modifying a shared variable
simultaneously.
2. Counting Semaphore:
o Has a counter that can be greater than 1.
o It allows a specific number of processes to access the resource simultaneously.
o For example, if there are 3 identical printers, a counting semaphore initialized
to 3 allows up to 3 processes to print at the same time.
How Does a Semaphore Work?
A semaphore has two main operations:
1. Wait (P or down operation):
o A process that wants to use the resource decreases the semaphore value by 1.
o If the value becomes negative, the process waits because the resource is not
available.
2. Signal (V or up operation):
o When a process finishes using the resource, it increases the semaphore value
by 1.
o This signals that the resource is now available for other processes.
Let’s connect this back to our bridge example:
The bridge keeper allows one car to cross (Wait operation).
When the car finishes crossing, the bridge keeper signals the next car to go (Signal
operation).
Example of Semaphore in Real Life
Let’s take a classic computing example: a printer in an office.
Imagine an office with one printer and multiple employees who want to print documents. If
two employees send print commands at the same time, the printer could mix the pages,
causing chaos!
Here, the printer is a shared resource, and access to it must be controlled.
We introduce a binary semaphore called printerSemaphore, initially set to 1.
Employee A wants to print:
o Checks the semaphore: it’s 1 → allowed to print → decreases semaphore to 0.
Employee B also wants to print:
o Checks the semaphore: it’s 0 → must wait until Employee A finishes.
Employee A finishes printing → signals the semaphore → value becomes 1.
Employee B is now allowed to print.
This prevents printing errors and ensures orderly access.
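The printer scenario can be sketched with Python's built-in threading.Semaphore. This is a minimal sketch: the employee names, page counts, and sleep times are illustrative, not part of any real system:

```python
import threading
import time

# A binary semaphore (initial value 1) guards the shared printer:
# only one employee can print at a time.
printer_semaphore = threading.Semaphore(1)
log = []

def print_document(employee, pages):
    printer_semaphore.acquire()        # Wait (P): take the printer or block
    try:
        log.append(f"{employee} starts printing")
        time.sleep(0.01 * pages)       # simulate printing time
        log.append(f"{employee} finishes printing")
    finally:
        printer_semaphore.release()    # Signal (V): free the printer

threads = [
    threading.Thread(target=print_document, args=("Employee A", 3)),
    threading.Thread(target=print_document, args=("Employee B", 2)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)   # one employee always finishes before the other starts
```

Whichever thread acquires the semaphore first prints to completion before the other begins, so the print jobs never interleave.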
Diagram to Visualize Semaphore
Here’s a simple diagram for clarity:
+-----------------+
| Shared Resource |
| (Printer) |
+-----------------+
^
| Signal (V)
+---+---+
| |
+---+-------+---+
| Semaphore |
| (Counter) |
+---+-------+---+
| Wait (P)
v
+-----------------+
| Process/Thread |
| Employee A, B.. |
+-----------------+
Processes wait if the resource is unavailable.
Signal indicates that the resource is free.
Semaphore ensures only allowed processes access the resource at a time.
Why Are Semaphores Important?
1. Prevent Race Conditions:
o A race condition occurs when multiple processes access and modify shared
data simultaneously.
o Semaphores ensure orderly access and prevent conflicts.
2. Control Resource Sharing:
o Helps manage limited resources like printers, CPU cycles, or database
connections.
3. Avoid Deadlock and Starvation:
o Proper use of semaphores prevents processes from waiting forever.
4. Efficient Synchronization:
o Semaphores are lightweight and effective for synchronizing processes without
complex coding.
Conclusion
Think of a semaphore as a friendly traffic controller in the computer world. It patiently
ensures that every process gets its turn to access shared resources, just like the bridge keeper
lets cars cross one by one. Without semaphores, shared resources in programs would be
chaotic, leading to crashes, data corruption, and confusion.
In short, semaphores bring order to concurrency. They are the silent guardians that keep the
flow of processes smooth, ensuring that every process or thread knows when to wait and
when to go.
4. Explain the concept of swapping with diagram.
Ans: Swapping in Operating Systems
A Different Beginning
Imagine you’re sitting at a study desk. The desk is small—it can only hold two or three books
at a time. But you have a whole shelf full of books you need to study. What do you do?
You keep the most important books on the desk, and when you need another one, you put
one book back on the shelf and bring the new one to the desk. This way, even though your
desk is small, you can still study all your books by swapping them in and out.
This is exactly what happens inside a computer. The desk is the RAM (main memory), the
shelf is the secondary storage (hard disk/SSD), and the books are the processes. Since RAM
is limited, the operating system uses a technique called swapping to move processes
between RAM and disk so that multiple programs can run smoothly.
Definition of Swapping
Swapping is a memory management technique in which a process is temporarily moved out
of the main memory (RAM) to secondary storage (disk) and later brought back into RAM for
execution.
Swap Out: Moving a process from RAM to disk.
Swap In: Bringing a process back from disk to RAM.
In short: Swapping allows the system to run more processes than the RAM alone can handle.
How Swapping Works
1. When RAM is full and a new process needs to run, the OS selects a process in RAM
that is idle or of lower priority.
2. That process is swapped out to the hard disk (into a special area called swap space).
3. The new process is swapped in to RAM and starts executing.
4. Later, when the swapped-out process is needed again, it is brought back into RAM,
possibly replacing another process.
This cycle continues, ensuring that the CPU always has something to execute.
Diagram of Swapping
Swap Out: Process moved from RAM → Disk
Swap In: Process moved from Disk → RAM
Or, in a more visual way:
Here, process P2 was swapped out to make space for P4.
Real-Life Analogy
Think of a restaurant kitchen with limited counter space. The chef can only keep a few
dishes on the counter at once. When a new order comes in, the chef moves one dish back to
the fridge (swap out) and brings the new dish to the counter (swap in). This way, the kitchen
can handle more orders than the counter space allows.
Advantages of Swapping
1. Efficient CPU Utilization
o CPU never sits idle because there’s always a process ready to execute.
2. Supports Multiprogramming
o More processes can be run than the size of RAM would normally allow.
3. Flexibility
o High-priority processes can be swapped in quickly by swapping out low-
priority ones.
4. Memory Management
o Makes better use of limited RAM.
Disadvantages of Swapping
1. Performance Overhead
o Moving processes between RAM and disk takes time, slowing down the
system.
2. Disk Wear and Tear
o Frequent swapping can reduce the lifespan of storage devices.
3. Latency
o If too many processes are swapped in and out, the system spends more time
swapping than executing (called thrashing).
4. I/O Bottleneck
o Since disk access is slower than RAM, performance can degrade.
Example Scenario
Suppose we have 3 processes:
P1 (high priority)
P2 (low priority)
P3 (medium priority)
RAM can only hold 2 processes at a time.
Initially, P2 and P3 are in RAM.
Suddenly, P1 arrives (high priority).
The OS swaps out P2 (low priority) to disk and swaps in P1.
Now RAM has P1 and P3, while P2 waits on disk.
This ensures that the CPU executes the most important tasks first.
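The priority-based swap in this scenario can be sketched as follows. A hedged sketch: the RAM capacity of 2, the process names, and the convention that a higher number means higher priority are all illustrative assumptions:

```python
# A minimal sketch of priority-based swapping: RAM holds at most two
# processes; a higher-priority arrival swaps out the lowest-priority
# resident. Higher number = higher priority (an assumption of this sketch).

def swap_in(ram, disk, process, priority, capacity=2):
    """Admit `process` to RAM, swapping out the lowest-priority resident
    if RAM is full and the newcomer outranks it."""
    if len(ram) >= capacity:
        victim = min(ram, key=ram.get)       # lowest-priority resident
        if ram[victim] >= priority:
            disk[process] = priority         # newcomer waits on disk
            return
        disk[victim] = ram.pop(victim)       # swap out the victim
    ram[process] = priority                  # swap in the newcomer

ram = {"P2": 1, "P3": 2}     # P2 low priority, P3 medium priority
disk = {}
swap_in(ram, disk, "P1", 3)  # P1 (high priority) arrives
print(ram)    # {'P3': 2, 'P1': 3}
print(disk)   # {'P2': 1}
```

As in the scenario above, P2 (low priority) is swapped out to disk so that P1 and P3 occupy RAM.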
Flow of Swapping
Why Swapping is Important
Without swapping, only a limited number of processes could run, restricted by RAM
size.
With swapping, the OS can handle more processes, improve CPU utilization, and
ensure high-priority tasks get executed.
Even though it may slow down performance, it’s a practical compromise to balance
limited memory with user demands.
Conclusion
Swapping is like the art of juggling: keeping multiple balls (processes) in play even though
your hands (RAM) can only hold a few at a time. By moving processes in and out of memory,
the operating system ensures that the CPU is always busy, users can run multiple programs,
and high-priority tasks get the attention they deserve.
Swap Out: Move a process from RAM to disk.
Swap In: Bring it back when needed.
Advantage: Better CPU utilization and multiprogramming.
Disadvantage: Performance overhead and possible thrashing.
For students, this makes swapping easy to remember: it’s just like managing a small desk
with too many books. For examiners, this answer is enjoyable because it doesn’t just define
swapping; it tells a story, uses analogies, and includes diagrams.
SECTION-C
5. Explain demand paging.
Ans: Imagine you are the manager of a huge library filled with millions of books. Now,
suppose a student walks in and wants to read just a few chapters from a couple of books.
Would you bring all the books from the shelves to their table immediately? That would be
time-consuming, take up unnecessary space, and make the library messy. Instead, you bring
only the chapters or books the student requests, when they need them. This smart strategy
saves space, time, and effort.
This, in a way, is what demand paging does in computer systems.
What is Demand Paging?
In a computer, programs are stored on the disk (secondary memory) and need to be
executed in the RAM (primary memory). RAM is fast but limited, while disk storage is
slower but vast. Instead of loading the entire program into RAM at once, demand paging
loads only the parts (pages) of a program that are actually needed, when they are needed.
A page is like a small block or chunk of the program. Think of it as a chapter of the book in
our library example.
So, demand paging = “bring only what you need, when you need it” for program pages in
memory.
How Does Demand Paging Work?
Let’s walk through a story of the computer executing a program:
1. The program arrives:
The program sits on the disk, waiting to run. Initially, no pages are loaded into RAM.
2. Page request:
When the CPU tries to execute an instruction that is not yet in RAM, a page fault
occurs. A page fault is like the student asking for a chapter you haven’t brought yet.
It signals: “I need this page now!”
3. Fetching the page:
The operating system (OS) locates the required page on the disk and loads it into
RAM. If RAM is full, the OS may remove another page (least recently used or some
other strategy) to make space.
4. Continue execution:
After the required page is loaded, the CPU resumes executing the instruction. Only
the needed pages occupy RAM, saving memory space.
5. Repeat as needed:
Every time a program tries to access a page not in RAM, a page fault happens, and
the required page is fetched from disk.
Example to Understand Demand Paging
Let’s take a simple example:
Suppose you have a program with 5 pages: P1, P2, P3, P4, P5.
Initially, RAM is empty.
CPU starts executing P1 → P3 → P4.
Step-by-step execution with demand paging:
1. CPU requests P1 → Page fault occurs → OS loads P1 into RAM.
2. CPU requests P3 → Page fault → OS loads P3.
3. CPU requests P4 → Page fault → OS loads P4.
Notice that P2 and P5 are never loaded, because the program didn’t use them. That’s
memory efficiency at work.
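The step-by-step execution above can be sketched as a small simulation. This is an illustrative sketch: the RAM capacity and the FIFO replacement policy are assumptions of the example, not requirements of demand paging itself:

```python
# A minimal sketch of demand paging: pages are loaded into RAM only
# when first accessed (each first access is a page fault). FIFO
# replacement is used here purely for illustration.

def run_demand_paging(references, ram_capacity=3):
    """Return (ram_contents, page_fault_count) after serving `references`."""
    ram = []
    faults = 0
    for page in references:
        if page not in ram:
            faults += 1                    # page fault: fetch from disk
            if len(ram) >= ram_capacity:
                ram.pop(0)                 # FIFO eviction to make room
            ram.append(page)
    return ram, faults

ram, faults = run_demand_paging(["P1", "P3", "P4"])
print(ram)     # ['P1', 'P3', 'P4'] -- P2 and P5 were never loaded
print(faults)  # 3
```

As in the example, P2 and P5 never enter RAM because the program never references them.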
Advantages of Demand Paging
1. Efficient Memory Usage:
Only the needed pages are loaded, freeing RAM for other programs.
2. Faster Program Start:
Programs can start running before fully loading, reducing wait time.
3. Support for Large Programs:
Even if a program is larger than RAM, demand paging allows it to execute by loading
only parts at a time.
4. Better Multiprogramming:
Multiple programs can share RAM efficiently because only the active pages of each
program occupy memory.
Disadvantages of Demand Paging
1. Page Fault Overhead:
Every page fault causes a delay while fetching the page from disk. Too many page
faults can slow down execution (called thrashing).
2. Complex OS Management:
The OS must track pages and handle page replacement, which adds complexity.
Diagram of Demand Paging
Here’s a simple representation:
Explanation: CPU asks RAM for a page. If the page is not in RAM, OS fetches it from disk.
Real-Life Analogy (Story-Like Summary)
Imagine a chef in a busy kitchen. Instead of preparing every dish for the day, the chef
prepares only the dishes ordered by customers. This saves ingredients (memory), avoids
waste, and ensures fresh dishes (efficient execution). Similarly, demand paging prepares
and loads only the pages that are requested, making the computer system efficient, faster,
and smarter.
Key Terms to Remember
Page: A fixed-size block of a program.
Page Fault: When a page is not in RAM, triggering a fetch from disk.
Lazy Loading: Another name for demand paging; load pages only when required.
Thrashing: Excessive page faults that slow down the system.
In conclusion, demand paging is like a smart librarian or a careful chef: it doesn’t waste
memory, saves time, and ensures the system runs smoothly. Programs start quickly, RAM is
used efficiently, and even huge programs can be executed in small memory spaces.
This story-like approach helps visualize how demand paging keeps memory organized and
execution efficient, making it one of the most practical techniques in modern operating
systems.
6. Why do we need disk and disk scheduling algorithm? What is disk reliability?
Ans: Disk Scheduling Algorithms and Disk Reliability
A Different Beginning
Imagine a huge library with only one librarian. Every second, dozens of students rush in with
requests:
“Please bring me the book from shelf 20!”
“I need the one from shelf 150!”
“Can you fetch the book from shelf 70?”
Now, the librarian has only one pair of legs. If he runs back and forth randomly, he’ll waste
time and energy. Instead, he needs a strategy: a way to decide the order of serving
requests so that everyone gets their books quickly and fairly.
This is exactly what happens inside a computer’s disk. The disk head (like the librarian)
moves across tracks to read or write data. Since multiple processes request data at the
same time, the operating system uses disk scheduling algorithms to decide the order. And
while doing this, we also care about disk reliability—making sure the disk doesn’t fail and
lose our precious data.
Why Do We Need Disk Scheduling Algorithms?
Disks are one of the slowest parts of a computer system compared to CPU and RAM.
Accessing data on a disk involves three main delays:
1. Seek Time: Time taken for the disk arm to move to the correct track.
2. Rotational Latency: Time taken for the desired sector to rotate under the read/write head.
3. Transfer Time: Time taken to actually read or write the data.
Out of these, seek time is the most expensive. If the disk head keeps jumping randomly, performance drops drastically.
That’s why we need disk scheduling algorithms:
To minimize seek time.
To maximize throughput (number of requests served per unit time).
To reduce response time for users.
To ensure fairness (no request waits forever).
In short, disk scheduling is like traffic management at a busy intersection: it prevents chaos and ensures smooth flow.
Common Disk Scheduling Algorithms
Let’s meet some of the famous “strategies” our librarian (disk head) can use:
1. First Come First Serve (FCFS)
Requests are served in the order they arrive.
Simple and fair, but can cause long delays if requests are far apart.
Analogy: Like serving students in the order they entered the library, even if one
wants a book on the ground floor and the next wants one on the top floor.
2. Shortest Seek Time First (SSTF)
The disk head moves to the request closest to its current position.
Reduces average seek time but may cause starvation (far requests wait too long).
Analogy: The librarian serves the student whose requested book is nearest to his
current location.
3. SCAN (Elevator Algorithm)
The disk head moves in one direction, serving all requests in its path, then reverses.
Prevents starvation and ensures fairness.
Analogy: Like an elevator that goes up, serving all floors, then comes down.
4. C-SCAN (Circular SCAN)
Similar to SCAN, but when the head reaches the end, it jumps back to the beginning
without serving requests on the return trip.
Provides uniform wait times.
5. LOOK and C-LOOK
Variants of SCAN and C-SCAN, but the head only goes as far as the last request
instead of the physical end of the disk.
Diagram: Disk Scheduling Example
Suppose the disk head is at track 50, and requests are: 20, 150, 90, 70, 30, 60.
FCFS Order: 20 → 150 → 90 → 70 → 30 → 60
SSTF Order: 60 → 70 → 90 → 30 → 20 → 150
SCAN Order: 60 → 70 → 90 → 150 → (reverse) → 30 → 20
This shows how different algorithms change the path of the disk head and affect total
movement.
What is Disk Reliability?
Now, imagine if the librarian suddenly drops a book into water and it’s ruined. In computing,
this is like a disk failure.
Disk reliability refers to the ability of a disk to store and retrieve data correctly over time
without failure. Since disks hold critical data, reliability is as important as speed.
Factors Affecting Disk Reliability
1. Mechanical Wear and Tear
o Hard disks have moving parts (spinning platters, moving heads). Over time,
these can fail.
2. Power Failures
o Sudden power loss can corrupt data.
3. Environmental Conditions
o Heat, dust, or shocks can damage disks.
4. Bad Sectors
o Portions of the disk may become unreadable.
Techniques to Improve Disk Reliability
1. Redundancy (RAID Systems)
o RAID (Redundant Array of Independent Disks) uses multiple disks to store
data with redundancy.
o Example: RAID 1 mirrors data on two disks: if one fails, the other still has it.
2. Error Detection and Correction
o Disks use checksums and ECC (Error Correcting Codes) to detect and fix
errors.
3. Backups
o Regular backups ensure data can be restored even if the disk fails.
4. SMART Monitoring
o Modern disks use Self-Monitoring, Analysis, and Reporting Technology
(SMART) to predict failures.
Diagram: Disk Reliability with RAID
Why Both Scheduling and Reliability Matter
Scheduling ensures the disk works efficiently, serving requests quickly.
Reliability ensures the disk works safely, protecting data from loss.
Together, they make sure that the system is not only fast but also trustworthy.
Conclusion
Disks are the “libraries” of computers, storing all our data. But since they are slower than
CPUs and RAM, we need disk scheduling algorithms like FCFS, SSTF, SCAN, and C-SCAN to
minimize delays and maximize efficiency. At the same time, we must ensure disk reliability
through redundancy, error correction, and backups, because speed is useless if the data
itself is lost.
For students, this makes the concept easy to remember: the disk is like a librarian managing
requests and protecting books. For examiners, this answer is enjoyable because it doesn’t
just define terms; it tells a story, uses analogies, and includes diagrams.
SECTION-D
7. What is a deadlock? How does it occur? What can we do to handle it?
Ans: Imagine a busy railway junction, where multiple trains are moving on tracks, and there
are several intersections. Now, picture this scenario: Train A is on Track 1 and wants to
move forward, but the intersection is blocked because Train B from Track 2 is occupying it.
Meanwhile, Train B cannot move because Train C is blocking the next section. Train C, in
turn, waits for Train A to move so that it can proceed. Each train is waiting for another to
give way, and no train can move forward. This traffic jam is a perfect real-life analogy of a
deadlock in computing.
1. What is a Deadlock?
In the world of computer systems, a deadlock occurs when two or more processes
(programs or tasks) are stuck waiting for resources held by each other, such that none of
them can proceed.
Let’s put it simply: imagine two friends, Alice and Bob, who are trying to borrow each
other’s books. Alice has Book 1 and wants Book 2, while Bob has Book 2 and wants Book 1.
Neither can proceed because each is waiting for the other. That’s exactly what a deadlock is
in a computing environment.
In technical terms:
A deadlock is a situation in a multiprogramming environment where two or more processes
are blocked forever because each is holding a resource and waiting for another resource
held by another process.
2. How Deadlock Occurs
Deadlocks occur in systems that have shared resources (like printers, CPU, memory, or
files). To understand this, let’s explore the four necessary conditions for deadlock, as
proposed by Coffman et al.:
1. Mutual Exclusion:
o A resource can only be used by one process at a time.
o Example: A printer cannot print two documents at once; only one process can
access it.
2. Hold and Wait:
o A process is holding at least one resource and waiting to acquire more
resources that are currently held by others.
o Example: Process P1 is holding a disk drive and waiting for a printer, while P2
is holding the printer and waiting for the disk drive.
3. No Preemption:
o Resources cannot be forcibly taken from a process; they can only be released
voluntarily.
o Example: You cannot snatch a file from a process; it has to finish using it or
release it.
4. Circular Wait:
o A circular chain of processes exists, where each process waits for a resource
held by the next process in the chain.
o Example: P1 → P2 → P3 → P1 (each waiting for the next process to release a
resource).
If all four conditions occur together, a deadlock happens.
Diagrammatically:
Here:
P1 holds R1 and waits for R2.
P2 holds R2 and waits for R1.
Neither can proceed → deadlock.
3. Real-Life Analogy
Deadlocks are not just a computer phenomenon; they happen in real life too:
Dining Philosophers Problem: Five philosophers sit around a table with a fork
between each pair. To eat, a philosopher needs two forks. If each philosopher picks
up one fork at the same time, everyone waits indefinitely for the other fork. This is a
classic deadlock scenario.
Traffic Jam: As in the train example, cars wait at an intersection without
anyone yielding, blocking each other forever.
Bank Transaction Deadlock: Two ATMs are trying to transfer money between two
accounts. ATM1 locks Account A and waits for Account B, ATM2 locks Account B and
waits for Account A. Both transactions freeze.
4. Methods to Handle Deadlock
Deadlock handling is a crucial topic in operating systems. There are four main strategies:
A. Deadlock Prevention
The idea is to prevent one or more of the necessary conditions for deadlock.
Mutual Exclusion: Make some resources shareable whenever possible. For instance,
read-only files can be accessed by multiple processes simultaneously.
Hold and Wait: Require processes to request all resources at once. If not all are
available, release any held resources.
No Preemption: Allow the system to preempt resources from processes if needed.
For example, suspend a process and take back its memory.
Circular Wait: Impose an ordering on resources and require processes to request
resources in a fixed order.
Pros: Deadlock is avoided completely.
Cons: Can lead to resource underutilization, as processes may wait unnecessarily.
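The circular-wait rule above (request resources in a fixed order) can be sketched in Python. The resource names and ranks here are assumptions for illustration, not part of the question paper; the point is that if every process takes the disk before the printer, a circular wait can never form:

```python
import threading

# Breaking the circular-wait condition (sketch): give every lock a fixed
# rank and always acquire locks in ascending rank order. The ranks and
# resource names here are assumptions for illustration.
disk = threading.Lock()
printer = threading.Lock()
RANK = {id(disk): 1, id(printer): 2}

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: RANK[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

# Processes may *ask* in different orders, but both end up taking the
# disk before the printer, so no circular wait can form.
acquire_in_order(printer, disk)
release_all(printer, disk)
print(disk.locked(), printer.locked())  # False False
```

Because all processes agree on one global order, the "P1 → P2 → ... → P1" cycle from the circular-wait condition is impossible.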
B. Deadlock Avoidance
Here, the system analyzes resource requests in advance and ensures that granting a
request does not lead to an unsafe state.
The Banker’s Algorithm is the most famous deadlock avoidance method.
Example: Imagine a banker with limited cash. If a customer requests money, the
banker only gives it if he can still satisfy all other customers’ maximum needs
without running out.
Pros: More efficient than prevention, as it allows higher resource utilization.
Cons: Requires knowledge of maximum resource requirements, which may not always be
possible.
C. Deadlock Detection and Recovery
Sometimes, deadlocks are allowed to occur, and the system detects them.
1. Detection:
o The system maintains a resource allocation graph.
o If there’s a cycle in the graph, deadlock exists.
2. Recovery:
o Terminate processes: Kill one or more processes to break the deadlock.
o Preempt resources: Take resources from some processes and allocate them
to others.
Pros: No need to restrict resources unnecessarily.
Cons: Can lead to process termination, loss of data, and system overhead.
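The detection step above, finding a cycle in the graph, can be sketched with a depth-first search over a wait-for graph. This is a minimal illustrative version (the graph and process names are made up for the sketch):

```python
# Deadlock detection sketch: a wait-for graph, where an edge P -> Q means
# "P waits for a resource held by Q". A cycle in this graph is a deadlock.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in graph}
    def dfs(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False
    return any(color[p] == WHITE and dfs(p) for p in graph)

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}  # P1 -> P2 -> P3 -> P1
healthy    = {"P1": ["P2"], "P2": ["P3"], "P3": []}
print(has_cycle(deadlocked), has_cycle(healthy))  # True False
```

Once a cycle is found, the recovery step kicks in: terminate one process in the cycle or preempt its resources until the cycle is broken.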
D. Ignoring Deadlock
In some systems (like desktop OSes), deadlock is rare, so the OS just ignores it.
If it occurs, the user may have to restart the system or the affected application.
Pros: No overhead in prevention/detection.
Cons: Risky in critical systems, e.g., banking or airline reservation systems.
5. Diagram for Deadlock Handling
Resource Allocation Graph (Deadlock Detection):
Cycle detected → Deadlock
6. Summary: A Story-Like Insight
Think of deadlock as a group of friends in a candy shop. Each friend wants a different candy,
but each candy is already held by another friend, and no one is willing to let go. Unless
someone decides to release their candy voluntarily, no one gets anything, and the situation
becomes a mess.
In computers, deadlock is like that candy dilemma. Systems need strategies to prevent,
avoid, or recover from deadlock to keep processes running smoothly. Sometimes, we
carefully plan (avoidance), sometimes we enforce strict rules (prevention), sometimes we
let it happen and fix it (detection & recovery), and sometimes we take the risk and ignore it.
Key Points to Remember
1. Deadlock is a permanent waiting situation in processes.
2. Four necessary conditions: Mutual exclusion, Hold and wait, No preemption,
Circular wait.
3. Can occur in multi-tasking and multi-processing environments.
4. Handling strategies: Prevention, Avoidance, Detection & Recovery, Ignoring.
5. Diagrams like Resource Allocation Graphs help visualize deadlocks.
Deadlock is not just an abstract concept—it’s real-life scenarios applied to computing, and
understanding it properly is like learning how to manage traffic jams, library books, or
shared printers efficiently.
8. Explain the concept of deadlock avoidance.
Ans: Deadlock Avoidance in Operating Systems
A Different Beginning
Imagine four friends sitting at a round dining table. Each has a plate of noodles, but there
are only four chopsticks placed between them, one between each pair. To eat, each friend
needs two chopsticks.
Now, suppose each friend picks up the chopstick on their right and waits for the one on
their left. What happens?
Nobody can eat. Everyone is waiting for the other to release a chopstick. This is a
deadlock: a situation where progress halts because everyone is waiting for resources held
by others.
But what if the friends were smarter? What if they had a rule: “Before picking up chopsticks,
check if both are available. If not, wait.” This way, at least one friend can eat fully, finish,
and release both chopsticks.
This clever rule is exactly what deadlock avoidance is all about.
What is Deadlock Avoidance?
Deadlock avoidance is a strategy used in operating systems to ensure that processes never
enter a deadlock state.
Instead of letting processes request resources blindly, the system carefully checks
whether granting a request could lead to a deadlock in the future.
If it’s safe, the request is granted. If not, the process must wait.
In short: Deadlock avoidance is like a cautious driver who checks the road ahead
before moving into an intersection, ensuring no traffic jam will occur.
Conditions for Deadlock
Before we dive deeper, let’s recall the four conditions that must hold simultaneously for a
deadlock to occur:
1. Mutual Exclusion: Only one process can use a resource at a time.
2. Hold and Wait: A process is holding one resource while waiting for another.
3. No Preemption: Resources cannot be forcibly taken away.
4. Circular Wait: A cycle of processes exists, each waiting for a resource held by the
next.
Deadlock avoidance works by ensuring that the system never allows these conditions to
align in a dangerous way.
How Deadlock Avoidance Works
The operating system uses algorithms to decide whether granting a resource request will
keep the system in a safe state.
Safe State: A state where there is at least one sequence of process execution that
allows all processes to finish without deadlock.
Unsafe State: A state where deadlock might occur (not guaranteed, but risky).
Deadlock avoidance ensures the system always stays in a safe state.
Diagram: Safe vs Unsafe State
Techniques for Deadlock Avoidance
1. Banker’s Algorithm (Most Famous)
Developed by Edsger Dijkstra, it works like a cautious banker.
A banker will only give a loan if they are sure they can still satisfy all customers
eventually.
Similarly, the OS grants a resource request only if it can guarantee that all processes
can finish in some order.
Steps:
1. Each process declares the maximum number of resources it may need.
2. The system checks if granting the request keeps the system in a safe state.
3. If yes → allocate resources.
4. If no → make the process wait.
Example:
Suppose there are 10 printers in total.
Process P1 may need 7, P2 may need 5, P3 may need 3.
If P1 requests 6 printers, the system checks: “If I give P1 six, can I still satisfy P2 and
P3 eventually?”
If yes, grant. If no, deny.
This prevents deadlock by never entering an unsafe state.
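The banker's reasoning in the printer example can be sketched as a safety check. This is an illustrative single-resource version (the helper name `is_safe` and the exact numbers are assumptions for the sketch, not the full multi-resource Banker's Algorithm):

```python
# Banker's-style safety check for a single resource type (printers):
# is there an order in which every process can receive its remaining
# need and run to completion?
def is_safe(available, max_need, allocated):
    need = [m - a for m, a in zip(max_need, allocated)]
    finished = [False] * len(max_need)
    work = available
    while True:
        progressed = False
        for i in range(len(max_need)):
            if not finished[i] and need[i] <= work:
                work += allocated[i]   # process i runs to completion
                finished[i] = True     # and returns its printers
                progressed = True
        if not progressed:
            return all(finished)       # safe only if everyone finished

# 10 printers total; maximum needs are P1=7, P2=5, P3=3.
# P1 requests 6: pretend to grant it, then test the resulting state.
print(is_safe(available=10 - 6, max_need=[7, 5, 3], allocated=[6, 0, 0]))  # True -> grant
# Had P2 also been granted 4, nothing could ever finish:
print(is_safe(available=0, max_need=[7, 5, 3], allocated=[6, 4, 0]))       # False -> deny
```

The key design point is that the check is done on the *pretend* state: the request is granted only if the state after granting is still safe.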
2. Resource Allocation Graph (RAG) with Claim Edges
A Resource Allocation Graph shows processes and resources as nodes.
Normally, a cycle in the graph means possible deadlock.
In avoidance, we add claim edges (dotted lines) to represent future requests.
Before granting a request, the system checks if converting a claim edge to an
allocation edge would create a cycle.
If yes → deny request. If no → grant request.
This ensures the system never allows circular wait.
Diagram: Resource Allocation Graph
If P1 requests R1 and P2 requests R2 simultaneously:
System checks for cycle before granting.
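The claim-edge check described above can be sketched as a cycle test on the graph. The node names follow the P1/P2/R1/R2 example, but the specific allocation shown (R1 held by P1, R2 held by P2) is an assumption for illustration:

```python
# RAG-avoidance sketch: before turning a claim edge (a possible future
# request) into a real request edge, test whether the combined graph
# would contain a cycle.
def has_cycle(graph):
    seen, on_path = set(), set()
    def dfs(node):
        seen.add(node); on_path.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_path or (nxt not in seen and dfs(nxt)):
                return True
        on_path.discard(node)
        return False
    return any(node not in seen and dfs(node) for node in graph)

# Allocation edges point resource -> process; request edges process -> resource.
rag = {"R1": ["P1"], "R2": ["P2"], "P1": [], "P2": []}
rag["P1"].append("R2")               # P1 requests R2: still acyclic, grant
safe_after_grant = not has_cycle(rag)
rag["P2"].append("R1")               # P2 then requests R1: cycle, deny
print(safe_after_grant, has_cycle(rag))  # True True
```

Granting the second request would complete the cycle P2 → R1 → P1 → R2 → P2, so the system makes P2 wait instead.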
Advantages of Deadlock Avoidance
1. No Deadlocks: The system never enters a deadlock state.
2. Efficient Resource Use: Resources are allocated safely.
3. Predictability: The system can guarantee safe execution sequences.
Disadvantages of Deadlock Avoidance
1. Advance Knowledge Required: Processes must declare maximum resource needs in
advance (not always realistic).
2. Complexity: Algorithms like Banker's are computationally heavy.
3. Reduced Resource Utilization: Sometimes resources are left idle just to maintain
safety.
4. Not Always Practical: In dynamic systems, predicting future requests is difficult.
Real-Life Analogy
Think of an amusement park ride with limited seats.
Each group must declare the maximum number of seats they might need.
The operator only lets groups board if they can guarantee that eventually, all groups
will get their turn.
This may mean some groups wait longer, but nobody is left stuck forever.
That’s deadlock avoidance in action.
Conclusion
Deadlock avoidance is like a cautious planner—it doesn’t just react to problems, it foresees
them and prevents them. By ensuring the system always stays in a safe state, techniques
like Banker’s Algorithm and Resource Allocation Graphs make sure processes don’t end up
in a deadlock.
Definition: Avoids unsafe states that may lead to deadlock.
Key Idea: Always check before granting resources.
Techniques: Banker’s Algorithm, RAG with claim edges.
Pros: No deadlocks, predictable execution.
Cons: Requires advance knowledge, can be complex.
For students, this makes deadlock avoidance easy to remember: it’s like friends sharing
chopsticks wisely, or a banker lending money cautiously. For examiners, this answer is
enjoyable because it blends technical clarity with storytelling, diagrams, and analogies.
“This paper has been carefully prepared for educational purposes. If you notice any mistakes or
have suggestions, feel free to share your feedback.”